cognitive science
Is everything a projection?: Materialists
The hallucination machine
In 2013, Anil Seth told a TED audience: "We’re all hallucinating all the time. When we agree about our hallucinations, we call it reality." He was summarizing thirty years of computational neuroscience into a sentence, and the sentence landed because it...

What is consciousness?: Mysterians
The vertigo
In 1983, Colin McGinn was reading Nagel’s "What Is It Like to Be a Bat?" for the fourth time and had the philosophical equivalent of vertigo. A bat perceives through echolocation — experience so alien no neuroscience could let a human know what it is like....

What is consciousness?: Materialists
The iron rod
In 1848, Phineas Gage survived an iron rod blasting through his frontal lobe and became a different person. Responsible Gage became impulsive, profane, unable to hold a job. His skull is in a museum at Harvard....

What's actually happening with AI?: Rationalists
The number that should bother everyone
Zero. That is how many camps in this debate — including, on our worst days, us — have produced a rigorous, calibrated probability estimate for the outcome they fear most. The accelerationists assert civilizational flourishing....

Beyond Human Truth: Expanding the Boundaries of Understanding
For centuries, science and philosophy have been humanity’s tools for making sense of the world. These disciplines have led us to astonishing discoveries, from the intricate structure of DNA to the far reaches of the cosmos....

When the Mind Rewrites Reality: How Bias Snowballs Grow Into Illusions
We like to think we see the world clearly, like we are noticing what is really happening. But a lot of the time our minds are quietly shaping what we notice and what we ignore....

The Cost of Letting Mainstream Media and Social Media Do Our Thinking
Lately I’ve been thinking about how both the political left and right are pushing narratives through social media, and a lot of what’s being shared is made up of half-truths or no truth at all. It feels like emotions are being intentionally poked and prodded to build followers around ideologies, not facts.
Honestly, you can’t even scroll social media anymore without stopping to ask yourself, “Is this actually true?” And that’s the norm now.
Before you can even consider the message, you have to research it just to figure out if it’s real. That alone tells me things are out of control.
What worries me most is how much of this stuff gets absorbed emotionally. A lot of people don’t consciously assess what they believe or take the time to verify it. If something aligns with how they feel, it gets accepted and then repeated.
Sometimes something goes viral almost instantly and gets accepted as truth, whether it’s fact or fiction, simply because it hits people emotionally.
And I get it. When something hits you emotionally and connects to a belief you already have, it’s human nature to accept it as truth, because our own biases want us to believe it.
If this keeps going, I really think it damages our ability to function as a country, because we lose a shared understanding of what’s real and what isn’t. Everything becomes narrative instead of truth.
I think part of the problem is that we’re becoming mentally lazy. We stop thinking critically and let confirmation bias run unchecked, and it just keeps building on itself.
The solution is simple, even if it’s not easy. Slow down. Question what we’re seeing. Separate facts from feelings. Think logically before reacting emotionally. Truth shouldn’t depend on which side it benefits.
Just something I’ve been thinking about.
v/r Russ
www.linkedin.com/in/russellclarkwy

This connects to something I think we underestimate: attention is a finite resource, and systems exploit that. People cannot fight on ten fronts at once. They cannot fact-check nonstop, stay outraged nonstop, and still build anything durable....

AMA with Nate Soares. Wednesday 2/4 at 10am CT
Author of If Anyone Builds It, Everyone Dies answers questions about why superhuman AI would kill us all.
For sure. I have heavy doubts that modern technology can, even in the short term, achieve "general" intelligence. We barely understand our own minds; it’s difficult to think that we can intentionally create it. I love your comment on memory being tied to experimentation....
I assume that "AGI" here means "Artificial General Intelligence" rather than "Adjusted Gross Income". I'm not afraid of AGI being anything of significance....
The word "alignment" has been reduced to meaning something like "it won't give you a meth recipe unless you know how to jailbreak it", and the companies go around saying things like "look how aligned our model is" in a way that is mostly about this property and not very much... Free Will is an Incoherent Concept
Free will is defined as the ability to make choices that have no dependency on anything in the universe. Literally not anything: not your past, not your experience, not your knowledge, not your upbringing, not the people who are around you, not your culture: freedom depends on...

My Journey with Claude Code
The more I use Claude Code, the more impressed I become.
I keep throwing progressively harder problems at it, and whenever the problem is conceptually tractable, it can usually just solve it. Not with hacks or brittle workarounds, but by actually engaging with the structure of the problem.
I decided to push it further by combining two difficult problems.
Difficult in the sense that either one would likely take me years to complete properly on my own, let alone both together, where the interaction between the two is a kind of complexity in its own right.
With the exception of one genuinely catastrophic error that required intervention to recover from, the tool has kept going, iterating, and making real progress.
What stands out most is that it seems to understand what progress actually is.
It does not treat the number of passing tests as a sacred metric. It is willing to break tests if that moves the system forward in a deeper, more honest way. That is something many humans struggle with.
The mere fact that it can reason about progress at all, rather than optimising a superficial proxy for it, is pretty remarkable.
And to think that over the past few months the tool has seen pretty consistent improvement at a cadence of weeks, with no end in sight.
Will software development be unrecognisable a year or even six months from now? I do not know.
Can you add a bit more color to the nature of the two problems? I love the idea of it recognizing a deeper sense of what "progress" means. I've noticed something similar: I can 'reason' with the LLMs on complex concepts, using metaphors in a precise way to tune and re-tune my...

THE OVERLOOKED PROBLEM WITH LLM CREATING AGI
Epistemically Contextual Chaos: The problem isn't just contextual, it's epistemically chaotic. The fact is, we CONTROL the information AIs get. Even if we lose the details of its development, whatever information an AI has, it has only because we found specific ideas...